Fix removal of storage variable with fatdb. #38
Conversation
It looks like this contributor signed our Contributor License Agreement. 👍 Many thanks, Parity Technologies CLA Bot
@rphmeier explained the double hashing but I've already forgotten what it was. :/ It is indeed needed though.
My first guess would be to get a uniform address space distribution (96 bits of the hashdb key are not).
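For illustration only, a minimal sketch of deriving an aux-entry address by hashing the already-hashed key, assuming the `keccak-hash` and `ethereum-types` crates; `aux_address` is a hypothetical name, not an identifier from this repository:

```rust
// Hypothetical sketch: derive the fatdb aux-entry address by double hashing,
// so aux entries are spread uniformly over the full 256-bit hashdb key space.
// Assumes the `keccak-hash` and `ethereum-types` crates; `aux_address` is an
// illustrative name, not this crate's API.
use ethereum_types::H256;
use keccak_hash::keccak;

fn aux_address(key: &[u8]) -> H256 {
    // First hash: the address the trie itself keys the value under.
    let hashed_key: H256 = keccak(key);
    // Second hash: where the key preimage (the "fat" part) would be stored.
    keccak(hashed_key)
}

fn main() {
    println!("aux entry address: {:?}", aux_address(b"some storage key"));
}
```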
There is something tricky with this removal: in our use case (archivedb) we do not want it to actually happen except within a single block (so we can still query at the previous block), and it will not, due to the archivedb implementation.
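To make that archive behaviour concrete, here is a toy sketch (not parity's ArchiveDB) of a journal whose commit never deletes from the backing store, so a removal only takes effect within the block being built; every name here is hypothetical:

```rust
// Toy model of an archive-style journal: removals only cancel inserts made in
// the same block's overlay; nothing is ever deleted from the backing store,
// so earlier blocks remain queryable. Purely illustrative, not ArchiveDB itself.
use std::collections::HashMap;

#[derive(Default)]
struct ToyArchive {
    backing: HashMap<[u8; 32], Vec<u8>>,        // persisted state, append-only
    overlay: HashMap<[u8; 32], (Vec<u8>, i32)>, // (value, refcount) for the current block
}

impl ToyArchive {
    fn insert(&mut self, key: [u8; 32], value: Vec<u8>) {
        let entry = self.overlay.entry(key).or_insert((Vec::new(), 0));
        entry.0 = value; // latest value written in this block wins
        entry.1 += 1;
    }

    fn remove(&mut self, key: [u8; 32]) {
        // Only recorded in the overlay; the backing store is untouched.
        self.overlay.entry(key).or_insert((Vec::new(), 0)).1 -= 1;
    }

    fn commit_block(&mut self) {
        for (key, (value, rc)) in self.overlay.drain() {
            if rc > 0 {
                // Inserts that survived the block are persisted.
                self.backing.insert(key, value);
            }
            // rc <= 0: the entry was inserted and removed within the block (or
            // never existed); the backing store is never touched, so old state
            // stays readable.
        }
    }
}

fn main() {
    let mut db = ToyArchive::default();
    db.insert([1u8; 32], b"value".to_vec());
    db.remove([1u8; 32]); // inserted and removed in the same block
    db.commit_block();
    assert!(db.backing.is_empty()); // nothing persisted, older blocks untouched
}
```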
I don't know much about this code but the changes LGTM. Regarding your question about using fatdb with non-archive pruning: the fatdb should only be able to enumerate keys that are present, right? So it's fine to remove the association regardless of the pruning mode?
@cheme could you add a test which inserts and then removes a key?
@cheme I agree that the current behaviour looks broken. I don't fully understand what the purpose of the double hashing is. For example:

```rust
#[test]
fn insert_and_removal_leaves_the_db_empty() {
    use hashdb::HashDB;

    let mut db = MemoryDB::<KeccakHasher>::new();
    let mut root = H256::new();
    {
        let mut t = FatDBMut::new(&mut db, &mut root);
        t.insert(&[1], &[2]).unwrap();
        t.remove(&[1]).unwrap();
    }
    // After removing the only key, neither the trie nodes nor the fatdb
    // aux entry for the key preimage should remain in the backing db.
    assert_eq!(db.keys().len(), 0);
}
```
@dvdplm should we publish
Please reference openethereum/parity-ethereum#9213 when updating the dependency in parity-ethereum so that the original issue is closed. 👍
Yes. I have pushed it out :)
In openethereum/parity-ethereum#9180, use of fatdb with an archive node leads to this situation within a block: a double delete at one address, caused by the intermediate insert being done at another address, which breaks the archive node's assertion that a reference count is at most 1.

So this fixes fatdb to delete and insert at the same address. (I kept the double-hashing scheme because it is consistent with the fatdb query and the existing test cases, but I am not sure it is needed; if a reviewer can point out why the double hashing is necessary, I would be interested.)
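As a purely illustrative toy sketch of why mismatched insert/remove addresses violate that invariant (the type and address strings below are hypothetical, not code from this crate):

```rust
// Toy reference-counted store with the archive-node invariant 0 <= rc <= 1.
// If the aux entry is inserted at one address but removed at another, the
// removal address sees a delete with no matching insert (a "double delete")
// and the insert address is never cleaned up. Names are illustrative only.
use std::collections::HashMap;

#[derive(Default)]
struct ToyRefCountDb {
    rc: HashMap<&'static str, i32>,
}

impl ToyRefCountDb {
    fn insert(&mut self, addr: &'static str) {
        let c = self.rc.entry(addr).or_insert(0);
        *c += 1;
        assert!(*c <= 1, "archive node expects at most one reference at {}", addr);
    }

    fn remove(&mut self, addr: &'static str) {
        let c = self.rc.entry(addr).or_insert(0);
        *c -= 1;
        assert!(*c >= 0, "double delete at {}", addr);
    }
}

fn main() {
    let mut db = ToyRefCountDb::default();

    // Buggy situation: insert and remove of the aux entry use different addresses.
    db.insert("aux_insert_address");
    db.remove("aux_remove_address"); // panics: delete with no matching insert

    // With the fix, both operations use the same address and the counts balance:
    // db.insert("aux_address");
    // db.remove("aux_address");
}
```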